
    TaskInsight: Understanding Task Schedules Effects on Memory and Performance

    Recent scheduling heuristics for task-based applications have managed to improve performance by taking into account memory-related properties such as data locality and cache sharing. However, there is still a general lack of tools that can provide insight into why, and where, different schedulers improve memory behavior, and how this relates to the applications' performance. To address this, we present TaskInsight, a technique to characterize the memory behavior of different task schedulers through the analysis of data reuse between tasks. TaskInsight provides high-level, quantitative information that can be correlated with tasks' performance variation over time to understand data reuse through the caches due to scheduling choices. TaskInsight is useful for diagnosing and identifying which scheduling decisions affected performance, when they were taken, and why the performance changed, in both single- and multi-threaded executions. We demonstrate how TaskInsight can diagnose examples where poor scheduling caused over 10% difference in performance for tasks of the same type, due to changes in the tasks' data reuse through the private and shared caches, in single- and multi-threaded executions of the same application. This flexible insight is key for optimization in many contexts, including data locality, throughput, memory footprint, and even energy efficiency.

    We thank the reviewers for their feedback. This work was supported by the Swedish Research Council and the Swedish Foundation for Strategic Research project FFL12-0051, and carried out within the Linnaeus Centre of Excellence UPMARC (Uppsala Programming for Multicore Architectures Research Center). This paper was also published with the support of the HiPEAC network, which received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 687698.
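
    The abstract's core idea, classifying each memory access by where the data was last touched, can be sketched roughly as follows. This is a hypothetical illustration, not the TaskInsight implementation: all names are invented, and the real tool works from detailed memory traces and cache models.

```python
# Illustrative sketch of classifying inter-task data reuse from a trace.
# An access whose data was last touched by an earlier task on the same
# core can hit in the private cache; data last touched on another core
# can only be reused through the shared cache. Scheduling decides which
# case occurs, which is the effect TaskInsight quantifies.
from collections import namedtuple

Access = namedtuple("Access", "task core addr")

def classify_reuse(trace):
    last_touch = {}  # addr -> (task, core) of the previous access
    stats = {"cold": 0, "intra_task": 0, "same_core": 0, "cross_core": 0}
    for a in trace:
        prev = last_touch.get(a.addr)
        if prev is None:
            stats["cold"] += 1                # first touch of this data
        elif prev[0] == a.task:
            stats["intra_task"] += 1          # reuse within one task
        elif prev[1] == a.core:
            stats["same_core"] += 1           # reuse via the private cache
        else:
            stats["cross_core"] += 1          # reuse via the shared cache
        last_touch[a.addr] = (a.task, a.core)
    return stats

trace = [Access("t0", 0, 100), Access("t0", 0, 104),
         Access("t1", 0, 100), Access("t2", 1, 104)]
print(classify_reuse(trace))
# -> {'cold': 2, 'intra_task': 0, 'same_core': 1, 'cross_core': 1}
```

    A scheduler that places t2 on core 0 instead would turn the cross-core reuse into a private-cache reuse, which is the kind of scheduling effect the tool correlates with per-task performance.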

    Spatial and Temporal Cache Sharing Analysis in Tasks

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016), Timisoara, Romania, February 8-11, 2016.

    Understanding the performance of large-scale multicore systems is crucial for achieving faster execution times and optimizing workload efficiency, but it is becoming harder due to the increased complexity of hardware architectures. Cache sharing is a key component of performance in modern architectures, and it has been the focus of performance analysis tools and techniques in recent years. At the same time, new programming models have been introduced to help the programmer deal with the complexity of large-scale systems, simplifying the coding process and making applications more scalable regardless of resource sharing. Task-based runtime systems are one example of this that has become popular recently. In this work we develop models to tackle performance analysis of shared resources in the task-based context, studying cache sharing in both its temporal and spatial forms. In temporal cache sharing, the effect of data reused over time by the executed tasks is modeled to predict different scenarios, resulting in a tool called StatTask. In spatial cache sharing, the effect of tasks competing for the cache at a given point in their execution is quantified and used to model their behavior on arbitrary cache sizes. Finally, we explain how these tools provide a unique and solid platform for improving runtime system schedulers, maximizing the performance of large-scale task-based applications.

    The work presented in this paper has been partially supported by the EU under the COST programme Action IC1305, 'Network for Sustainable Ultrascale Computing (NESUS)' (European Cooperation in Science and Technology, COST), and by the Swedish Research Council, and carried out within the Linnaeus Centre of Excellence UPMARC (Uppsala Programming for Multicore Architectures Research Center).
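
    The temporal modeling described here builds on reuse-distance analysis, which can be sketched in a few lines. This is a generic illustration of the underlying technique, not the StatTask code: an access hits in a fully associative LRU cache of C lines exactly when its stack distance (distinct addresses touched since the last access to the same address) is below C, so one pass over a trace predicts hit ratios for arbitrary cache sizes.

```python
# Stack-distance (reuse-distance) analysis: the classic basis for
# predicting LRU cache behavior at any size from a single trace pass.
def stack_distances(trace):
    stack = []                       # LRU stack of addresses, MRU first
    dists = []
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)    # distinct addrs since last access
            stack.remove(addr)
        else:
            d = float("inf")         # cold miss: never seen before
        dists.append(d)
        stack.insert(0, addr)
    return dists

def hit_ratio(trace, cache_lines):
    dists = stack_distances(trace)
    hits = sum(1 for d in dists if d < cache_lines)
    return hits / len(dists)

trace = ["a", "b", "c", "a", "b", "a"]
print([hit_ratio(trace, c) for c in (1, 2, 4)])
# -> [0.0, 0.16666666666666666, 0.5]
```

    The same curve evaluated at private- and shared-cache sizes is what lets a model predict how co-scheduled tasks behave under different cache allocations.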

    Generalized empirical Bayesian methods for discovery of differential data in high-throughput biology

    Motivation: High-throughput data are now commonplace in biological research. Rapidly changing technologies and applications mean that novel methods for detecting differential behaviour that account for a 'large P, small n' setting are required at an increasing rate. The development of such methods is, in general, being done on an ad hoc basis, leading to repeated development cycles and a lack of standardization between analyses.

    Results: We present here a generalized method for identifying differential behaviour within high-throughput biological data through empirical Bayesian methods. This approach builds on our baySeq algorithm, which identifies differential expression in RNA-seq data based on a negative binomial distribution, and in paired data based on a beta-binomial distribution. Here we show how the same empirical Bayesian approach can be applied to any parametric distribution, removing the need for lengthy development of novel methods for differently distributed data. Comparisons with existing methods developed to address specific problems in high-throughput biological data show that these generic methods can achieve equivalent or better performance. A number of enhancements to the basic algorithm are also presented to increase flexibility and reduce computational costs.

    Availability and implementation: The methods are implemented in the R baySeq (v2) package, available on Bioconductor: http://www.bioconductor.org/packages/release/bioc/html/baySeq.html. Contact: [email protected]. Supplementary information: Supplementary data are available at Bioinformatics online.

    This work was supported by European Research Council Advanced Investigator Grant ERC-2013-AdG 340642 (TRIBE). This is the author accepted manuscript; the final version is available from Oxford University Press via http://dx.doi.org/10.1093/bioinformatics/btv56
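
    The empirical Bayes idea behind this family of methods can be sketched as a comparison of marginal likelihoods: score a "no difference" model (one shared mean for all samples) against a "difference" model (one mean per group), integrating the unknown mean over an empirically chosen set of candidate values. The sketch below is in Python for illustration only (the actual package is in R); the fixed dispersion and the flat grid prior are simplifying assumptions, not baySeq's estimation procedure.

```python
# Empirical-Bayes-style model comparison for negative binomial counts:
# marginal likelihood of "same mean" vs "different means per group".
import numpy as np
from scipy.stats import nbinom

def nb_loglik(counts, mean, dispersion=0.1):
    # scipy parameterizes nbinom by (n, p); convert from (mean, dispersion)
    n = 1.0 / dispersion
    p = n / (n + mean)
    return nbinom.logpmf(counts, n, p).sum()

def log_marginal(counts, mean_grid):
    # average the likelihood over the grid: a flat "empirical prior"
    ll = np.array([nb_loglik(counts, m) for m in mean_grid])
    return np.logaddexp.reduce(ll) - np.log(len(mean_grid))

g1 = np.array([5, 7, 6])          # counts in condition 1
g2 = np.array([30, 25, 28])       # counts in condition 2
grid = np.linspace(1, 50, 50)

m_same = log_marginal(np.concatenate([g1, g2]), grid)
m_diff = log_marginal(g1, grid) + log_marginal(g2, grid)
print(m_diff > m_same)            # the "difference" model wins here
```

    Swapping `nbinom` for any other parametric distribution changes nothing in the scoring machinery, which is the generalization the paper describes.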

    The HiPEAC vision for advanced computing in Horizon 2020


    An Innovative Interactive Modeling Tool to Analyze Scenario-Based Physician Workforce Supply and Demand

    Effective physician workforce management requires that the various organizations comprising the House of Medicine be able to assess their current and future workforce supply. This information has direct relevance to funding of graduate medical education. We describe a dynamic modeling tool that examines how individual factors and practice variables can be used to measure and forecast the supply of, and demand for, existing and new physician services. The system we describe, while built to analyze the pathologist workforce, is sufficiently broad and robust for use in any medical specialty. Our design provides a computer-based software model populated with data from surveys and best estimates by specialty experts about current and new activities in the scope of practice. The model describes the steps needed and data required for analysis of supply and demand. Our modeling tool allows educators and policy makers, in addition to physician specialty organizations, to assess how various factors may affect the demand (and supply) of current and emerging services. Examples of factors evaluated include types of professional services (3 categories with 16 subcategories), service locations, elements related to the Patient Protection and Affordable Care Act, new technologies, the aging population, and changing roles in capitated, value-based, and team-based systems of care. The model also helps identify where physicians in a given specialty will likely need to assume new roles, develop new expertise, and become more efficient in practice to accommodate new value-based payment models.

    Beaver-Dredged Canals and their Spatial Relationship to Beaver-Cut Stumps

    Castor canadensis Kuhl (North American beavers) are central-place foragers who collect woody plants and building materials from their surroundings and return to a main body of water containing a lodge or food cache. It has been suggested that beavers dredge water-filled canals to extend access to foraging areas; however, the possibility that these engineered transportation routes function as extensions to the beavers' central place has yet to be considered. Our objective in this study was to gain a better understanding of the formation and utilization of canals by beavers and thus further elucidate the complex foraging behavior of these ecosystem engineers. During 2004–2011, we mapped beaver ponds, canals, and cut stumps in eight groundwater-fed wetlands, from at least four separate colonies, in Indianapolis, IN. We found that the mean length, depth, and width of the beaver-dredged canals were 604.3 ± 493.1 m, 28.0 ± 22.2 cm, and 107.7 ± 107.1 cm, respectively. Two of the canal systems were mapped for multiple years; their length, depth, and width increased over time, supporting the prediction that beavers continuously engineer these canal systems to extend their foraging area into new locations. In addition, and in contrast to previous studies, we found that the number of beaver-cut stumps was negatively related to distance from canals, but not to distance from the body of water containing their lodges. We recommend that studies of optimal foraging in beavers take canals into account, where applicable, when relating foraging to distance from the central place.

    Revised nomenclature for the mammalian long-chain acyl-CoA synthetase gene family.

    By consensus, the acyl-CoA synthetase (ACS) community, with the advice of the human and mouse genome nomenclature committees, has revised the nomenclature for the mammalian long-chain acyl-CoA synthetases. ACS is the family root name, and the human and mouse genes for the long-chain ACSs are termed ACSL1,3-6 and Acsl1,3-6, respectively. Splice variants of ACSL3, -4, -5, and -6 are cataloged. Suggestions for naming other family members and for the nonmammalian acyl-CoA synthetases are made.

    Evidence for the Higgs-boson Yukawa coupling to tau leptons with the ATLAS detector

    Results of a search for H → ττ decays are presented, based on the full set of proton-proton collision data recorded by the ATLAS experiment at the LHC during 2011 and 2012. The data correspond to integrated luminosities of 4.5 fb−1 and 20.3 fb−1 at centre-of-mass energies of √s = 7 TeV and √s = 8 TeV, respectively. All combinations of leptonic (τ → ℓνν̄, with ℓ = e, µ) and hadronic (τ → hadrons + ν) tau decays are considered. An excess of events over the expected background from other Standard Model processes is found with an observed (expected) significance of 4.5 (3.4) standard deviations. This excess provides evidence for the direct coupling of the recently discovered Higgs boson to fermions. The measured signal strength, normalised to the Standard Model expectation, of µ = 1.43 +0.43 −0.37 is consistent with the predicted Yukawa coupling strength in the Standard Model.
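
    For readers less used to "standard deviations" as a measure of evidence, the quoted significances translate into one-sided Gaussian tail probabilities. This is a small numeric conversion, not part of the ATLAS analysis itself:

```python
# Convert significance in Gaussian standard deviations to a one-sided
# p-value: p = P(Z > z) for a standard normal Z.
from scipy.stats import norm

p_obs = norm.sf(4.5)   # observed 4.5 sigma
p_exp = norm.sf(3.4)   # expected 3.4 sigma
print(f"observed p = {p_obs:.1e}, expected p = {p_exp:.1e}")
# observed p ~ 3.4e-06, expected p ~ 3.4e-04
```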

    Measurements of fiducial and differential cross sections for Higgs boson production in the diphoton decay channel at √s = 8 TeV with ATLAS

    Measurements of fiducial and differential cross sections are presented for Higgs boson production in proton-proton collisions at a centre-of-mass energy of √s = 8 TeV. The analysis is performed in the H → γγ decay channel using 20.3 fb−1 of data recorded by the ATLAS experiment at the CERN Large Hadron Collider. The signal is extracted using a fit to the diphoton invariant mass spectrum, assuming that the width of the resonance is much smaller than the experimental resolution. The signal yields are corrected for the effects of detector inefficiency and resolution. The pp → H → γγ fiducial cross section is measured to be 43.2 ± 9.4 (stat.) +3.2 −2.9 (syst.) ± 1.2 (lumi.) fb for a Higgs boson of mass 125.4 GeV decaying to two isolated photons that have transverse momentum greater than 35% and 25% of the diphoton invariant mass and each with absolute pseudorapidity less than 2.37. Four additional fiducial cross sections and two cross-section limits are presented in phase space regions that test the theoretical modelling of different Higgs boson production mechanisms, or are sensitive to physics beyond the Standard Model. Differential cross sections are also presented, as a function of variables related to the diphoton kinematics and the jet activity produced in the Higgs boson events. The observed spectra are statistically limited but broadly in line with the theoretical expectations.
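
    The extraction step described here amounts to σ_fid = N_signal / (c × L): the fitted signal yield divided by a correction factor c for detector inefficiency and resolution, and by the integrated luminosity L. The numbers below are made-up placeholders chosen only to land near the quoted central value; they are not the ATLAS inputs.

```python
# Fiducial cross section from a fitted yield: sigma = N / (c * L).
def fiducial_xsec(n_signal, correction, lumi_fb):
    """Cross section in fb from yield, correction factor, and L in fb^-1."""
    return n_signal / (correction * lumi_fb)

# Placeholder inputs: ~570 fitted signal events, c = 0.65, L = 20.3 fb^-1
print(round(fiducial_xsec(570, 0.65, 20.3), 1))  # -> 43.2 fb
```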